
    Combining Models of Approximation with Partial Learning

    In Gold's framework of inductive inference, the model of partial learning requires the learner to output exactly one correct index for the target object and only the target object infinitely often. Since infinitely many of the learner's hypotheses may be incorrect, it is not obvious whether a partial learner can be modified to "approximate" the target object. Fulk and Jain (Approximate inference and scientific method. Information and Computation 114(2):179--191, 1994) introduced a model of approximate learning of recursive functions. The present work extends their research and solves an open problem of Fulk and Jain by showing that there is a learner which approximates and partially identifies every recursive function by outputting a sequence of hypotheses which, in addition, are almost all finite variants of the target function. The subsequent study is dedicated to the question of how these findings generalise to the learning of r.e. languages from positive data. Here three variants of approximate learning are introduced and investigated with respect to the question of whether they can be combined with partial learning. Following the line of Fulk and Jain's research, further investigations provide conditions under which partial language learners can eventually output only finite variants of the target language. The combinability of other partial learning criteria is also briefly studied. Comment: 28 pages

    Classifying the Arithmetical Complexity of Teaching Models

    This paper classifies the complexity of various teaching models by their position in the arithmetical hierarchy. In particular, we determine the arithmetical complexity of the index sets of the following classes: (1) the class of uniformly r.e. families with finite teaching dimension, and (2) the class of uniformly r.e. families with finite positive recursive teaching dimension witnessed by a uniformly r.e. teaching sequence. We also derive the arithmetical complexity of several other decision problems in teaching, such as the problem of deciding, given an effective coding $\{\mathcal{L}_0, \mathcal{L}_1, \mathcal{L}_2, \ldots\}$ of all uniformly r.e. families, any $e$ such that $\mathcal{L}_e = \{L^e_0, L^e_1, \ldots\}$, and any $i$ and $d$, whether or not the teaching dimension of $L^e_i$ with respect to $\mathcal{L}_e$ is upper bounded by $d$. Comment: 15 pages in International Conference on Algorithmic Learning Theory, 201
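    For a finite family of concepts, the classical teaching dimension mentioned above can be computed by brute force. A minimal Python sketch over a toy universe follows; the family and universe are illustrative inventions, not from the paper (which treats uniformly r.e. families):

    ```python
    from itertools import combinations

    def teaching_dimension(target, family, universe):
        """Size of a smallest labelled sample that is consistent with
        `target` and with no other concept in `family`."""
        labels = {x: (x in target) for x in universe}
        for k in range(len(universe) + 1):
            for sample in combinations(universe, k):
                consistent = [c for c in family
                              if all((x in c) == labels[x] for x in sample)]
                if consistent == [target]:
                    return k
        return None  # target is not distinguishable within the family

    # A nested (chain) family over the universe {0, 1, 2}
    family = [frozenset(), frozenset({0}),
              frozenset({0, 1}), frozenset({0, 1, 2})]
    ```

    For the chain family above, the set {0} needs two labelled examples (0 labelled positive, 1 labelled negative) to rule out every other concept, while the empty set is identified by the single negative example 0.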

    Learning Moore Machines from Input-Output Traces

    The problem of learning automata from example traces (but no equivalence or membership queries) is fundamental in automata learning theory and practice. In this paper we study this problem for finite state machines with inputs and outputs, and in particular for Moore machines. We develop three algorithms for solving this problem: (1) the PTAP algorithm, which transforms a set of input-output traces into an incomplete Moore machine and then completes the machine with self-loops; (2) the PRPNI algorithm, which uses the well-known RPNI algorithm for automata learning to learn a product of automata encoding a Moore machine; and (3) the MooreMI algorithm, which directly learns a Moore machine using PTAP extended with state merging. We prove that MooreMI has the fundamental identification in the limit property. We also compare the algorithms experimentally in terms of the size of the learned machine and several notions of accuracy, introduced in this paper. Finally, we compare with OSTIA, an algorithm that learns a more general class of transducers, and find that OSTIA generally does not learn a Moore machine, even when fed with a characteristic sample.
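    The PTAP step described in (1) can be sketched as follows. This is an illustration under assumed conventions, not the authors' implementation: each trace is taken to be an (inputs, outputs) pair with one output per visited state, so `len(outputs) == len(inputs) + 1`, and the traces are assumed mutually consistent.

    ```python
    def ptap(traces, alphabet):
        """Build a prefix-tree Moore machine from input-output traces,
        then complete it with self-loops on undefined transitions."""
        delta, out = {}, {}            # transitions and state outputs
        for inputs, outputs in traces:
            state = ()                 # states are input prefixes
            out[state] = outputs[0]
            for i, sym in enumerate(inputs):
                nxt = state + (sym,)
                delta[(state, sym)] = nxt
                out[nxt] = outputs[i + 1]
                state = nxt
        for state in out:              # completion step: add self-loops
            for sym in alphabet:
                delta.setdefault((state, sym), state)
        return delta, out

    def run(delta, out, inputs):
        """Output sequence produced by the completed machine on `inputs`."""
        state, outs = (), [out[()]]
        for sym in inputs:
            state = delta[(state, sym)]
            outs.append(out[state])
        return outs

    traces = [(('a', 'b'), ('0', '1', '0'))]
    delta, out = ptap(traces, {'a', 'b'})
    ```

    On the seen trace the machine reproduces the recorded outputs; on unseen inputs the self-loop completion keeps it in its current state.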

    Minimal Synthesis of String To String Functions From Examples

    We study the problem of synthesizing string to string transformations from a set of input/output examples. The transformations we consider are expressed using deterministic finite automata (DFA) that read pairs of letters, one letter from the input and one from the output. The DFA corresponding to these transformations have additional constraints, ensuring that each input string is mapped to exactly one output string. We suggest that, given a set of input/output examples, the smallest DFA consistent with the examples is a good candidate for the transformation the user was expecting. We therefore study the problem of, given a set of examples, finding a minimal DFA consistent with the examples and satisfying the functionality and totality constraints mentioned above. We prove that, in general, this problem (the corresponding decision problem) is NP-complete. This is unlike the standard DFA minimization problem which can be solved in polynomial time. We provide several NP-hardness proofs that show the hardness of multiple (independent) variants of the problem. Finally, we propose an algorithm for finding the minimal DFA consistent with input/output examples, that uses a reduction to SMT solvers. We implemented the algorithm, and used it to evaluate the likelihood that the minimal DFA indeed corresponds to the DFA expected by the user.Comment: SYNT 201
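    As a self-contained illustration of the underlying search problem, the sketch below finds a smallest plain DFA consistent with labelled example strings by exhaustive enumeration. It deliberately ignores the pair-letter encoding and the functionality/totality constraints specific to the paper, whose algorithm instead reduces the search to SMT; brute force of this kind is feasible only for tiny instances.

    ```python
    from itertools import product

    def min_dfa(positive, negative, alphabet, max_states=4):
        """Smallest DFA (states 0..n-1, start state 0) accepting every
        string in `positive` and no string in `negative`, found by
        exhaustive search over all transition tables and accepting sets."""
        for n in range(1, max_states + 1):
            keys = [(q, a) for q in range(n) for a in alphabet]
            for trans in product(range(n), repeat=len(keys)):
                delta = dict(zip(keys, trans))
                for accepting in product([False, True], repeat=n):
                    def accepts(word):
                        q = 0
                        for a in word:
                            q = delta[(q, a)]
                        return accepting[q]
                    if all(accepts(w) for w in positive) and \
                       not any(accepts(w) for w in negative):
                        return n, delta, accepting
        return None  # no consistent DFA with at most max_states states
    ```

    For example, separating {"", "aa"} from {"a"} requires counting the parity of a's, so no one-state DFA is consistent and the minimal consistent DFA has two states.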

    Mapping the disease-specific LupusQoL to the SF-6D

    Purpose To derive a mapping algorithm to predict SF-6D utility scores from the non-preference-based LupusQoL and test the performance of the developed algorithm on a separate independent validation data set. Method LupusQoL and SF-6D data were collected from 320 patients with systemic lupus erythematosus (SLE) attending routine rheumatology outpatient appointments at seven centres in the UK. Ordinary least squares (OLS) regression was used to estimate models of increasing complexity in order to predict individuals' SF-6D utility scores from their responses to the LupusQoL questionnaire. Model performance was judged on predictive ability through the size and pattern of prediction errors generated. The performance of the selected model was externally validated on an independent data set containing 113 female SLE patients who had again completed both the LupusQoL and SF-36 questionnaires. Results Four of the eight LupusQoL domains (physical health, pain, emotional health, and fatigue) were selected as dependent variables in the final model. Overall model fit was good, with R² = 0.7219, MAE = 0.0557, and RMSE = 0.0706 when applied to the estimation data set, and R² = 0.7431, MAE = 0.0528, and RMSE = 0.0663 when applied to the validation sample. Conclusion This study provides a method by which health state utility values can be estimated from patient responses to the non-preference-based LupusQoL, generalisable beyond the data set upon which it was estimated. Despite concerns over the use of OLS to develop mapping algorithms, we find this method to be suitable in this case due to the normality of the SF-6D data.
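    The OLS mapping approach can be sketched on synthetic data. The domain scores, coefficients, and noise level below are invented for illustration; this is not the published LupusQoL-to-SF-6D algorithm, only the fitting-and-evaluation pattern the abstract describes.

    ```python
    import numpy as np

    # Synthetic stand-in data: 320 "patients", four LupusQoL-style domain
    # scores on a 0-100 scale, and utilities generated from invented
    # coefficients plus noise (none of these numbers are from the paper).
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 100, size=(320, 4))
    beta = np.array([0.002, 0.0015, 0.001, 0.0015])
    y = 0.3 + X @ beta + rng.normal(0, 0.05, 320)  # "observed" utilities

    # OLS fit with an intercept column, then the fit statistics the
    # abstract reports (R-squared, MAE, RMSE).
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    pred = A @ coef
    mae = np.mean(np.abs(y - pred))
    rmse = np.sqrt(np.mean((y - pred) ** 2))
    r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
    ```

    In practice the fitted coefficients would then be applied to a held-out validation sample, as the abstract describes, to check that the prediction errors do not deteriorate.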

    Grain Alignment in Molecular Clouds

    One of the most informative techniques for studying magnetic fields in molecular clouds is based on the use of starlight polarization and polarized emission arising from aligned dust. How reliably polarization maps can be interpreted in terms of magnetic fields is the issue that grain alignment theory addresses. I briefly review the basic physical processes involved in grain alignment. Comment: 8 pages, 1 figure, to appear in Zermatt proceedings

    Reduced emissions from deforestation and forest degradation (REDD): a climate change mitigation strategy on a critical track

    Background Following recent discussions, there is hope that a mechanism for reduction of emissions from deforestation and forest degradation (REDD) will be agreed by the Parties of the UNFCCC at their 15th meeting in Copenhagen in 2009 as an eligible action to mitigate climate change in post-2012 commitment periods. Countries introducing a REDD regime in order to generate benefits need to implement sound monitoring and reporting systems and specify the associated uncertainties. The principle of conservativeness addresses the problem of estimation errors and requires the reporting of reliable minimum estimates (RMEs). Here the potential to generate benefits from applying a REDD regime is assessed with reference to the sampling and non-sampling errors that influence the reliability of estimated activity data and emission factors. Results A framework for calculating carbon benefits that includes assessment errors is developed. Theoretical, sample-based considerations as well as a simulation study for five selected countries with low to high deforestation and degradation rates show that even small assessment errors (5% and less) may outweigh successful efforts to reduce deforestation and degradation. Conclusion The generation of benefits from REDD is possible only in situations where assessment errors are carefully controlled.
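    The conservativeness effect can be sketched numerically (all numbers invented for illustration, not taken from the paper's simulation): when credits are granted on a reliable minimum estimate, i.e. on a conservative confidence bound of the remaining emissions, an assessment error of the same order as the achieved reduction can cancel the creditable benefit.

    ```python
    # Toy model: a country reduces emissions by a real 5% relative to its
    # baseline, but remaining emissions are reported conservatively as a
    # one-sided 95% upper bound to respect the conservativeness principle.
    baseline = 100.0                  # baseline emissions (arbitrary units)
    measured = baseline * (1 - 0.05)  # emissions after a real 5% reduction

    def creditable(assessment_error, z=1.645):
        """Creditable reduction when remaining emissions are reported as
        measured * (1 + z * relative assessment error); negative values
        mean no credits can be granted."""
        rme_emissions = measured * (1 + z * assessment_error)
        return baseline - rme_emissions

    # With a 1% assessment error some credit survives; with a 5% error the
    # conservative bound exceeds the baseline and the benefit vanishes.
    ```

    This mirrors the abstract's finding that assessment errors of 5% and less may already outweigh a successful reduction effort.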